
    Quadratic Optimization for Nonsmooth Optimization Algorithms: Theory and Numerical Experiments

    Nonsmooth optimization arises in many scientific and engineering applications, such as optimal control and neural network training. Gradient sampling and bundle methods are two efficient classes of algorithms for solving nonsmooth optimization problems, and quadratic optimization (QP) problems arise as subproblems in both. This thesis introduces an algorithm for solving the types of QP problems that arise in such methods. The proposed algorithm is inspired by one proposed by Krzysztof C. Kiwiel in the 1980s. Improvements are proposed so that the algorithm can solve problems with additional bound constraints, which are often required in practice; the solver also allows for general quadratic terms in the objective. Our QP solver has been implemented in C++. This thesis not only covers the theoretical background related to the QP solver; it also contains the results of numerical experiments on a wide range of randomly generated test problems.
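The master subproblem described above can be sketched concretely. The following is a minimal illustration (not the thesis's C++ solver) of the simplex-constrained QP that arises in bundle and gradient-sampling methods: minimize 0.5*||G @ l||^2 + b @ l subject to sum(l) == 1 and l >= 0, where the columns of G are sampled subgradients and b holds linearization errors. SciPy's general-purpose SLSQP stands in for a specialized solver; the function name `bundle_qp` and the example data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def bundle_qp(G, b):
    """Solve min 0.5*||G l||^2 + b.l  s.t.  sum(l) = 1, l >= 0 (a sketch)."""
    m = G.shape[1]
    Q = G.T @ G                       # positive semidefinite Hessian
    fun = lambda l: 0.5 * l @ Q @ l + b @ l
    jac = lambda l: Q @ l + b
    cons = ({"type": "eq", "fun": lambda l: l.sum() - 1.0,
             "jac": lambda l: np.ones(m)},)
    res = minimize(fun, np.full(m, 1.0 / m), jac=jac,
                   bounds=[(0.0, None)] * m, constraints=cons,
                   method="SLSQP")
    return res.x

# Example: three subgradients of f(x) = |x1| + |x2| near the origin.
G = np.array([[1.0, -1.0, 1.0],
              [1.0, 1.0, -1.0]])
lam = bundle_qp(G, np.zeros(3))
d = -G @ lam   # search direction: negated minimum-norm convex combination
```

Here the origin lies in the convex hull of the three subgradients, so the minimum-norm combination is zero, signaling (approximate) stationarity.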

    On the Convergence of L-shaped Algorithms for Two-Stage Stochastic Programming

    In this paper, we design, analyze, and implement a variant of two-loop L-shaped algorithms for solving two-stage stochastic programming problems that arise in important application areas, including revenue management and power systems. We consider the setting in which it is intractable to compute exact objective function and (sub)gradient information, and instead only estimates of objective function and (sub)gradient values are available. Under common assumptions, including fixed recourse and bounded (sub)gradients, the algorithm generates a sequence of iterates that converge to a neighborhood of optimality, where the radius of the convergence neighborhood depends on the level of inexactness of the objective function estimates. The number of outer and inner iterations needed to find an approximately optimal iterate is provided. Finally, we show a sample complexity result for the algorithm with a Polyak-type step-size policy that can be extended to analyze other situations. We also present a numerical study that verifies our theoretical results and demonstrates the superior empirical performance of our proposed algorithms over classic solvers.
    Comment: 39 pages, 2 figures
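The classic L-shaped idea underlying this work can be sketched on a toy instance. The sketch below uses exact cuts (unlike the inexact estimates analyzed in the paper) on a hypothetical one-dimensional problem: minimize c*x + E[p*max(d - x, 0)] over 0 <= x <= 20, with demand d uniform over {5, 10, 15}; the master problem accumulates optimality cuts and the recourse function supplies a new cut each outer iteration.

```python
import numpy as np
from scipy.optimize import linprog

c, p, demands = 1.0, 3.0, [5.0, 10.0, 15.0]

def recourse(x):
    """Expected second-stage cost and one subgradient at x."""
    q = np.mean([p * max(d - x, 0.0) for d in demands])
    g = np.mean([-p if d > x else 0.0 for d in demands])
    return q, g

cuts = []          # each cut: theta >= q_k + g_k * (x - x_k)
x = 0.0
for _ in range(20):
    q, g = recourse(x)
    cuts.append((q, g, x))
    # Master LP over (x, theta): min c*x + theta s.t. all cuts, theta >= 0.
    A = [[gk, -1.0] for (qk, gk, xk) in cuts]
    b = [gk * xk - qk for (qk, gk, xk) in cuts]
    res = linprog([c, 1.0], A_ub=A, b_ub=b,
                  bounds=[(0.0, 20.0), (0.0, None)])
    x, theta = res.x
    if recourse(x)[0] - theta <= 1e-8:   # lower bound matches true cost
        break

total = c * x + recourse(x)[0]           # optimal value of the toy instance
```

Since the recourse function here is piecewise linear, the cut loop terminates finitely; the paper's contribution is the convergence analysis when the cuts themselves are built from noisy estimates.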

    Limited-Memory BFGS with Displacement Aggregation

    A displacement aggregation strategy is proposed for the curvature pairs stored in a limited-memory BFGS method such that the resulting (inverse) Hessian approximations are equal to those that would be derived from a full-memory BFGS method. This means that, if a sufficiently large number of pairs are stored, then an optimization algorithm employing the limited-memory method can achieve the same theoretical convergence properties as when full-memory (inverse) Hessian approximations are stored and employed, such as a local superlinear rate of convergence under assumptions that are common for attaining such guarantees. To the best of our knowledge, this is the first work in which a local superlinear convergence rate guarantee is offered by a quasi-Newton scheme that does not either store all curvature pairs throughout the entire run of the optimization algorithm or store an explicit (inverse) Hessian approximation.
    Comment: 24 pages, 3 figures
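For context, the standard limited-memory BFGS machinery that the aggregation strategy builds on can be sketched as the classic two-loop recursion: the last m displacement pairs (s_k, y_k) implicitly define the inverse-Hessian approximation applied to a gradient. (The displacement-aggregation scheme itself, which reproduces full-memory BFGS, is more involved and is not reproduced here; the quadratic test problem and fixed step size are illustrative choices.)

```python
import numpy as np

def lbfgs_direction(grad, pairs):
    """Apply the implicit inverse-Hessian approximation to `grad`.

    pairs: list of (s, y) with s = x_{k+1} - x_k, y = g_{k+1} - g_k,
    oldest first; each pair must satisfy the curvature condition s.y > 0.
    """
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):           # first loop: newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    if pairs:                              # initial scaling H0 = gamma * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):  # second loop: oldest to newest
        beta = rho * (y @ q)
        q += (a - beta) * s
    return q                               # = H_k @ grad

# Usage sketch: minimize the strictly convex quadratic 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x, pairs, m = np.zeros(2), [], 5
for _ in range(50):
    g = A @ x - b
    x_new = x - 0.5 * lbfgs_direction(g, pairs)  # fixed step for simplicity
    s, y = x_new - x, A @ x_new - b - g
    if s @ y > 1e-12:
        pairs = (pairs + [(s, y)])[-m:]    # keep the m most recent pairs
    x = x_new
```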

    Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Optimization using Predictive Variance Reduction

    In this paper, we propose a stochastic method for solving equality constrained optimization problems that utilizes predictive variance reduction. Specifically, we develop a method based on the sequential quadratic programming paradigm that employs variance reduction in the gradient approximations. Under reasonable assumptions, we prove that a measure of first-order stationarity evaluated at the iterates generated by our proposed algorithm converges to zero in expectation from arbitrary starting points, for both constant and adaptive step size strategies. Finally, we demonstrate the practical performance of our proposed algorithm on constrained binary classification problems that arise in machine learning.
    Comment: 41 pages, 5 figures, 4 tables
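The predictive variance-reduction ingredient can be sketched in isolation, in the style of SVRG, on an unconstrained finite-sum least-squares problem: a full gradient is computed at a periodic snapshot, and each cheap inner step corrects a single-sample gradient with the snapshot information, yielding an unbiased estimator whose variance vanishes near the solution. (The paper combines this ingredient with sequential quadratic programming to handle equality constraints, which is not reproduced here; problem sizes and step size are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true                                   # noiseless targets

grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]   # per-sample gradient
full_grad = lambda w: X.T @ (X @ w - y) / n

w, step = np.zeros(d), 0.02
for _ in range(30):                              # outer loop: snapshots
    w_snap = w.copy()
    mu = full_grad(w_snap)                       # full gradient at snapshot
    for _ in range(2 * n):                       # inner loop: cheap steps
        i = rng.integers(n)
        # Variance-reduced estimator: unbiased, with variance shrinking
        # as both w and w_snap approach the solution.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= step * g
```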

    Optimal scheduling for multiple description video streams in wireless multihop networks

    In this work, we investigate optimal system scheduling for competing multiple description (MD) video streams in a resource-limited wireless multihop network. By jointly optimizing MD coding, rate control, and multipath routing, we propose an optimal joint rate-control and routing algorithm that handles the constraints arising from MD streams shared among multiple users over multiple paths. We design this joint algorithm in a distributed manner that is amenable to online implementation in wireless networks.
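The distributed flavor of such joint rate-control algorithms can be illustrated with a hypothetical, much-simplified sketch: dual (price-based) decomposition for network utility maximization, where each source adapts its rate using only the price of its own path and each link updates its price from its local load. (The paper's actual algorithm additionally couples MD coding and multipath routing; the two-link, three-source topology below is invented for illustration.)

```python
import numpy as np

# Routing matrix R[l, s] = 1 if source s uses link l (fixed paths here).
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
cap = np.array([1.0, 2.0])        # link capacities
lam = np.ones(2)                  # link prices (dual variables)
step = 0.05

for _ in range(2000):
    # Source update: maximize log(x_s) - x_s * (path price) => x_s = 1/price.
    price = R.T @ lam
    x = 1.0 / np.maximum(price, 1e-9)
    # Link update: projected gradient ascent on the dual (price rises when
    # the link is overloaded, falls toward zero when underused).
    lam = np.maximum(lam + step * (R @ x - cap), 0.0)
```

With strictly concave log utilities the dual is differentiable, so this price iteration converges for a small enough step; both capacity constraints end up tight at the utility-maximizing rates.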